{ "cells": [ { "cell_type": "code", "execution_count": null, "id": "07a0a926", "metadata": {}, "outputs": [], "source": [ "import math\n", "import numpy as np\n", "import matplotlib.pyplot as plt" ] }, { "cell_type": "markdown", "id": "8a0c2f6e", "metadata": {}, "source": [ "# Homework 1" ] }, { "cell_type": "markdown", "id": "9992241e", "metadata": {}, "source": [ "In this homework you will implement a numerical approach to gradient descent and use it to implement the perceptron algorithm. That will take place in stages described below. But we'll start by describing how to perform gradient descent numerically." ] }, { "cell_type": "markdown", "id": "f5393ce8", "metadata": {}, "source": [ "### The method of finite differences\n", "\n", "Given a function $f(x)$, the analytical approach to gradient descent - i.e., to finding the value of $x$ that minimizes $f(x)$ - is to compute $f'(x)$ and iterate as follows, where $\\alpha \\in (0, 1]$ is the learning rate:\n", "* x = 0\n", "* while not converged:\n", " * $x = x - \\alpha f'(x)$\n", " \n", "If you cannot compute $f'(x)$ analytically, you can estimate it as follows, for sufficiently small $\\epsilon$: \n", "\n", "$f'(x) \\approx \\frac{f(x + \\epsilon) - f(x - \\epsilon)}{2\\epsilon}$\n", "\n", "The *method of finite differences* makes the assumption that over very small intervals the function behaves linearly and the slope of the tangent line can be estimated as the difference between function values at the ends of the interval divided by the interval width.\n", "\n", "For example, the slope of $f(x) = x^2$ at $x = 1$ can be estimated as $f'(1) = \\frac{(1 + 0.001)^2 - (1 - 0.001)^2}{2 * 0.001}$." ] }, { "cell_type": "code", "execution_count": null, "id": "98418213", "metadata": {}, "outputs": [], "source": [ "e = 0.001\n", "(math.pow(1 + e, 2) - math.pow(1 - e, 2))/(2*e)" ] }, { "cell_type": "markdown", "id": "302343fd", "metadata": {}, "source": [ "Note that the number above is very close to what you would get analytically by taking derivatives: $f'(x) = 2x$ so $f'(1) = 2*1 = 2$.\n", "\n", "You'll use the method of finite differences to compute the derivative of a loss function with respect to the weights of a perceptron." ] }, { "cell_type": "markdown", "id": "e69a08ef", "metadata": {}, "source": [ "Below is a simple implementation of the method of finite differences for a univariate function. It is overly simple, running for a fixed number of iterations, and assuming constants for $\\epsilon$ and $\\alpha$, but it works for simple cases." ] }, { "cell_type": "code", "execution_count": null, "id": "d69740bf", "metadata": {}, "outputs": [], "source": [ "def fd_demo(f, x0):\n", " e = 0.001\n", " a = 0.01\n", " x = x0\n", " for _ in range(1000):\n", " g = (f(x + e) - f(x - e)) / (2*e)\n", " x = x - a * g\n", " return x" ] }, { "cell_type": "code", "execution_count": null, "id": "30421acd", "metadata": {}, "outputs": [], "source": [ "# x*x is minimized at 0\n", "f = lambda x: x * x\n", "fd_demo(f, 2)" ] }, { "cell_type": "code", "execution_count": null, "id": "47ef2ab4", "metadata": {}, "outputs": [], "source": [ "# sin(x) is minimized at lots of points, the closest to 2 is 3pi/2\n", "f = lambda x: math.sin(x)\n", "print(fd_demo(f, 2))\n", "print(math.pi * 3 / 2)" ] }, { "cell_type": "markdown", "id": "8f43f0e4", "metadata": {}, "source": [ "### Perceptron class\n", "\n", "Below is a simple Perceptron class that you can use in this assignment. Feel free to use it as is, make changes, or throw it out and write your own. 
  {
   "cell_type": "markdown",
   "id": "8f43f0e4",
   "metadata": {},
   "source": [
    "### Perceptron class\n",
    "\n",
    "Below is a simple Perceptron class that you can use in this assignment. Feel free to use it as is, make changes, or throw it out and write your own. It creates a weight vector of small random numbers when initialized, and has methods to compute the activation for an input and to produce a binary class label for an input.\n",
    "\n",
    "Note that the class below does not maintain an explicit bias term $b$. You can add one or, better yet, make sure that all inputs $x$ have a 1 in one of the positions."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4cf39bfb",
   "metadata": {},
   "outputs": [],
   "source": [
    "class Perceptron:\n",
    "\n",
    "    def __init__(self, n):\n",
    "        \"\"\"\n",
    "        n (int) - The length of x vectors that the classifier will process.\n",
    "        \"\"\"\n",
    "\n",
    "        # Start with weights uniformly chosen from the range [-0.5, 0.5)\n",
    "        self.weights = (np.random.rand(n) - 0.5).tolist()\n",
    "\n",
    "    def activation(self, x):\n",
    "        # The dot product of the input with the weight vector\n",
    "        return np.dot(x, self.weights)\n",
    "\n",
    "    def predict(self, x):\n",
    "        # Threshold the activation to get a class label in {-1, 1}\n",
    "        return 1 if self.activation(x) > 0 else -1"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "48e92616",
   "metadata": {},
   "source": [
    "### Task 1: The hinge loss\n",
    "\n",
    "Fill in the function below to compute the hinge loss. Recall that the hinge loss is 0 if the activation and the class label have the same sign; otherwise it is $-y \\cdot \\text{activation}$. Equivalently, it is $\\max(0, -y \\cdot \\text{activation})$."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "69adecc9",
   "metadata": {},
   "outputs": [],
   "source": [
    "def loss_hinge(y, activation):\n",
    "    pass"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "afe259c2",
   "metadata": {},
   "source": [
    "### Task 2: One step of gradient descent\n",
    "\n",
    "Fill in the function below, which takes the following arguments:\n",
    "* clf - An instance of the perceptron class above\n",
    "* x - A training instance\n",
    "* y - The corresponding true class label\n",
    "* learning_rate - A learning rate in the range (0, 1]\n",
    "* loss_fn - A function that takes as input the true class label and the activation and returns a real number, the loss on that example\n",
    "* epsilon - The delta to use in the method of finite differences\n",
    "\n",
    "The function must return a new set of weights to use in the perceptron after performing one gradient descent update using the training example and loss function. To do that it will:\n",
    "* Loop over each of the weights\n",
    "* Compute the partial derivative of the loss with respect to that weight using the method of finite differences\n",
    "* Use the computed gradient (the list of partials with respect to each of the weights) to compute a new weight vector as $w = w - \\alpha g$, where $w$ is the weight vector, $\\alpha$ is the learning rate, and $g$ is the computed gradient\n",
    "* Return the new weight vector\n",
    "\n",
    "**Note:** Be careful not to modify the weights of the perceptron in place in the routine below. A sanity-check cell follows the stub."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "20fb54a5",
   "metadata": {},
   "outputs": [],
   "source": [
    "def gd_step(clf, x, y, learning_rate, loss_fn, epsilon = 0.001):\n",
    "    pass"
   ]
  },
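  {
   "cell_type": "markdown",
   "id": "a9e5c7f3",
   "metadata": {},
   "source": [
    "Once `loss_hinge` (Task 1) and `gd_step` (Task 2) are filled in, a few spot checks like the ones below can catch common mistakes, in particular modifying the perceptron's weights in place. This cell is only an illustrative check derived from the definitions above, not part of the graded tasks, and it will fail until both functions are implemented."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c8b1d6e4",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Correctly classified examples (y and activation share a sign) incur no loss\n",
    "assert loss_hinge(1, 2.0) == 0\n",
    "assert loss_hinge(-1, -0.5) == 0\n",
    "\n",
    "# Misclassified examples incur -y * activation\n",
    "assert loss_hinge(-1, 2.0) == 2.0\n",
    "assert loss_hinge(1, -0.5) == 0.5\n",
    "\n",
    "# One gradient step should return new weights without touching the old ones\n",
    "p = Perceptron(3)\n",
    "before = list(p.weights)\n",
    "new_weights = gd_step(p, [1.0, 0.5, -0.5], 1, 0.01, loss_hinge)\n",
    "assert p.weights == before, \"gd_step must not modify clf.weights in place\"\n",
    "assert len(new_weights) == 3\n",
    "print(\"All checks passed.\")"
   ]
  },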
] }, { "cell_type": "code", "execution_count": null, "id": "1e0e5abd", "metadata": {}, "outputs": [], "source": [ "n = 10\n", "X = np.concatenate((np.random.rand(n, 2) + 1,\n", " np.random.rand(n, 2)))\n", "X = np.hstack((np.expand_dims(np.ones(2*n), 1), X))\n", "Y = [1] * n + [-1] * n\n", "colors = c = ['r'] * n + ['g'] * n\n", "\n", "# Randomize the order of the instances just for fun\n", "rng = np.random.default_rng()\n", "state = rng.__getstate__()\n", "rng.shuffle(X)\n", "rng.__setstate__(state)\n", "rng.shuffle(Y)\n", "rng.__setstate__(state)\n", "rng.shuffle(colors)" ] }, { "cell_type": "code", "execution_count": null, "id": "95e8d2b7", "metadata": { "scrolled": true }, "outputs": [], "source": [ "plt.scatter(X[:,1], X[:,2], c = colors)" ] }, { "cell_type": "markdown", "id": "30f70899", "metadata": {}, "source": [ "### Task 3: Full gradient descent\n", "\n", "If you've done everthing above correctly, the code below will perform gradient descent to train the classifier.\n", "\n", "Modify this code so that it runs until one epoch produces no classification errors rather than running for a fixed number of iterations." ] }, { "cell_type": "code", "execution_count": null, "id": "6671de9a", "metadata": {}, "outputs": [], "source": [ "clf = Perceptron(3)\n", "\n", "for epoch in range(100):\n", " for x, y in zip(X, Y):\n", " clf.weights = gd_step(clf, x, y, 0.01, loss_hinge)\n", "\n", "print(clf.weights)" ] }, { "cell_type": "markdown", "id": "6b4f343b", "metadata": {}, "source": [ "### Task 4: Plot some hyperplanes\n", "\n", "Run full gradient descent 5 times and write a routine to convert the weights into slope/intercept form. Then use the function below to plot the hyperplanes learned by the perceptron along with the data in one graph. The second cell below shows how that can be done. Write a paragraph explaining what you see in the plot, touching on how much variation there is from run to run and whether the separators seem like \"good\" ones." ] }, { "cell_type": "code", "execution_count": null, "id": "f55bc1ed", "metadata": {}, "outputs": [], "source": [ "def weights_to_slope_intercept(weights):\n", " pass" ] }, { "cell_type": "code", "execution_count": null, "id": "b9d6f313", "metadata": {}, "outputs": [], "source": [ "def abline(slope, intercept):\n", " \"\"\"Plot a line from slope and intercept\"\"\"\n", " axes = plt.gca()\n", " x_vals = np.array(axes.get_xlim())\n", " y_vals = intercept + slope * x_vals\n", " plt.plot(x_vals, y_vals, '--')" ] }, { "cell_type": "code", "execution_count": null, "id": "db3ee89d", "metadata": {}, "outputs": [], "source": [ "plt.xlim([0, 2])\n", "plt.ylim([0, 2])\n", "plt.scatter(X[:,1], X[:,2], c = colors)\n", "abline(-1, 2)\n", "abline(2, 0)" ] }, { "cell_type": "markdown", "id": "abc138f2", "metadata": {}, "source": [ "### Task 5: Do task 4 again with a new loss\n", "\n", "Repeat The previous task, generating 5 plots of the separators correspond to hyperplanes, but this time use the loss function below.\n", "\n", "How are the results different? Write a brief paragraph explaining why the results look different." 
] }, { "cell_type": "code", "execution_count": null, "id": "0286d066", "metadata": {}, "outputs": [], "source": [ "def loss_hinge_margin(y, activation):\n", " return 0 if y * activation > 1 else -y * activation + 1" ] }, { "cell_type": "code", "execution_count": null, "id": "4bd59176", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.8" } }, "nbformat": 4, "nbformat_minor": 5 }